A "Thermal" Perceptron Learning Rule

Author

  • Marcus R. Frean
Abstract

The thermal perceptron is a simple extension to Rosenblatt's perceptron learning rule for training individual linear threshold units. It finds stable weights for nonseparable problems as well as separable ones. Experiments indicate that if a good initial setting for a temperature parameter, T0, has been found, then the thermal perceptron outperforms the Pocket algorithm and methods based on gradient descent. The learning rule stabilizes the weights (learns) over a fixed training period. For separable problems it finds separating weights much more quickly than the usual rules.
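The abstract summarizes the rule's behaviour without stating its update. As a rough reconstruction: in the commonly cited form of the thermal perceptron, a misclassified pattern triggers the ordinary perceptron correction scaled by exp(-|phi|/T), where phi is the unit's net input and the temperature T is annealed from its initial value T0 towards zero over the fixed training period. The sketch below assumes that form; the function name thermal_perceptron, the learning-rate parameter eta, and the linear annealing schedule are illustrative choices, not details taken from the paper.

```python
import numpy as np

def thermal_perceptron(X, t, T0=1.0, epochs=100, eta=1.0, rng=None):
    """Minimal sketch of a thermal perceptron for one linear threshold unit.

    X : (n_patterns, n_inputs) array of inputs (append a bias column if needed).
    t : (n_patterns,) array of targets in {-1, +1}.
    On a misclassified pattern, the usual perceptron correction is scaled by
    exp(-|phi| / T), where phi = w . x and T is annealed linearly from T0 to 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_patterns, n_inputs = X.shape
    w = np.zeros(n_inputs)
    for epoch in range(epochs):
        T = T0 * (1.0 - epoch / epochs)      # anneal over the fixed training period
        for i in rng.permutation(n_patterns):
            phi = w @ X[i]
            y = 1 if phi >= 0 else -1
            if y != t[i]:
                # Corrections for patterns far from the hyperplane are damped,
                # so the weights settle down as T approaches zero.
                w += eta * t[i] * X[i] * np.exp(-abs(phi) / max(T, 1e-12))
    return w
```

Damping the correction by exp(-|phi|/T) means that, as T shrinks, only patterns close to the decision boundary can still move the weights appreciably, which is what allows the rule to settle to stable weights on nonseparable data rather than oscillating like the plain perceptron rule.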


Similar Articles

Efficient VLSI Implementation Based On Constructive Neural Network Algorithms

C-Mantec is a new neural network algorithm that combines competition between neurons with the thermal perceptron learning rule. Neuron learning is governed by the thermal perceptron rule, which guarantees the stability of the learned information as the architecture grows and as the neurons compete for newly incoming data. Competition makes it possible that even after new neurons ha...


Statistical Mechanics of On-line Ensemble Teacher Learning through a Novel Perceptron Learning Rule

In ensemble teacher learning, the ensemble teachers have only uncertain information about the true teacher, and this information is given by an ensemble consisting of an infinite number of ensemble teachers whose variety is sufficiently rich. In this setting, a student learns from an ensemble teacher that is selected at random from a pool of many ensemble teachers at each iteration. An interesting point ...


Multi-state perceptrons: learning rule and perceptron of maximal stability

A new perceptron learning rule that works with multilayer neural networks made of multi-state units is obtained, and the corresponding convergence theorem is proved. The definition of the perceptron of maximal stability is enlarged to include these new multi-state perceptrons, and a proof of the existence and uniqueness of such optimal solutions is outlined.


Statistical Mechanics of Adaptive Weight Perturbation Learning

Weight perturbation learning was proposed as a learning rule that adds perturbation to the adjustable parameters of learning machines. The generalization performance of weight perturbation learning was analyzed by statistical-mechanical methods. Weight perturbation learning has the same asymptotic generalization property as perceptron learning. In this paper we consider the difference be...

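The last entry above mentions weight perturbation learning only in outline. As a generic illustration (not necessarily the exact on-line rule analyzed in that paper), one common formulation perturbs all weights simultaneously by a small random amount and uses the measured change in loss as a directional-derivative estimate; the names weight_perturbation_step, loss_fn, eta, and beta below are illustrative.

```python
import numpy as np

def weight_perturbation_step(w, loss_fn, eta=0.1, beta=1e-3, rng=None):
    """One simultaneous weight-perturbation update (a common formulation).

    w       : current weight vector.
    loss_fn : callable mapping a weight vector to a scalar loss.
    beta    : perturbation size; eta : learning rate.
    """
    rng = np.random.default_rng() if rng is None else rng
    xi = rng.choice([-1.0, 1.0], size=w.shape)      # random perturbation signs
    delta_e = loss_fn(w + beta * xi) - loss_fn(w)   # measured change in loss
    # Move against the finite-difference estimate of the directional derivative.
    return w - eta * (delta_e / beta) * xi
```

Only two loss evaluations are needed per update, so no explicit gradient computation is required.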


Journal:
  • Neural Computation

Volume 4, Issue -

Pages -

Publication date: 1992